Musk and Hawking
Should the government regulate artificial intelligence? It already is
As nearly every day brings additional news about how artificial intelligence (AI) will affect the way we live, a heated debate has broken out over what the United States should do about it. On the one hand, the likes of Elon Musk and Stephen Hawking argue that we must regulate now to slow down and develop general principles governing AI's development because of its potential to cause massive economic dislocation and even destroy human civilization. On the other hand, AI advocates argue that there is no consensus on what AI is, let alone what it can ultimately do. Regulating AI in such circumstances, these advocates claim, will simply stifle innovation and cede to other countries the technological initiative that has done so much to power the U.S. economy. The intense focus on these foundational questions threatens to obscure, however, a key point: AI is already subject to regulation in many ways, and, even while the broader debates about AI continue, additional regulations look sure to follow.
Guidelines for Preventing an AI Takeover Endorsed by Musk and Hawking
Two of modern science's most powerful voices, Elon Musk and Stephen Hawking, have both issued warnings about the dangers of artificial intelligence in the past (Musk has even been tinkering with ways humanity can augment itself to keep up). But good news: Musk and Hawking are jumping on board the ethical AI bandwagon. In an open letter published by the Future of Life Institute (FLI) last Monday, Musk and Hawking joined several AI and robotics researchers in endorsing a comprehensive outline called the "Asilomar AI Principles": 23 guidelines for avoiding an artificial intelligence Armageddon. The goal is to guide AI research toward beneficial intelligence rather than "undirected intelligence." The principles are the product of the FLI's 2017 Beneficial AI conference.
BioEdge: Artificial concerns about artificial intelligence
Earlier this year, the American Information Technology and Innovation Foundation (ITIF) awarded its facetious "annual Luddite award" to a loose coalition of AI sceptics, including Tesla CEO Elon Musk and renowned physicist Stephen Hawking. The ITIF labelled the likes of Musk and Hawking "alarmists" engaged in "feverish hand-wringing about a looming artificial intelligence apocalypse". Yet the sarcastic gesture did not go down well. This week Nature published a scathing critique of the ITIF's "fanciful futurism", defending the "legitimate concerns" of Musk and Hawking. Ironically, the risks of AI are already being felt indirectly as universities lose young talent to the corporate sector.